
    Statistical Machine Learning for Human Behaviour Analysis

    Human behaviour analysis has introduced several challenges in various fields, such as applied information theory, affective computing, robotics, biometrics and pattern recognition. This Special Issue focused on novel vision-based approaches, mainly related to computer vision and machine learning, for the automatic analysis of human behaviour. We solicited submissions on the following topics: information theory-based pattern classification, biometric recognition, multimodal human analysis, low-resolution human activity analysis, face analysis, abnormal behaviour analysis, unsupervised human analysis scenarios, 3D/4D human pose and shape estimation, human analysis in virtual/augmented reality, affective computing, social signal processing, personality computing, activity recognition, human tracking in the wild, and applications of information-theoretic concepts to human behaviour analysis. In the end, 15 papers were accepted for this Special Issue [1-15]. These papers, which are reviewed in this editorial, analyse human behaviour from the aforementioned perspectives and in most cases define the state of the art in their corresponding fields.

    Most of the included papers are application-based systems, while [15] focuses on the understanding and interpretation of a classification model, which is an important factor for the classifier's credibility. Given a set of categorical data, [15] uses multi-objective optimization algorithms, such as ENORA and NSGA-II, to produce rule-based classification models that are easy to interpret. The performance of the classifier and its number of rules are optimized jointly during learning: the former is to be maximized, while the latter is to be minimized. Testing on public databases, using 10-fold cross-validation, shows the superiority of the proposed method over classifiers generated by previously published methods such as PART, JRip, OneR and ZeroR. Two of the published papers ([1,9]) have privacy as their main concern while developing systems for biometric recognition and action recognition, respectively. Reference [1] considers a privacy-aware biometric system: the idea is that the identity of users should not be readily revealed from their biometrics, such as facial images. The authors therefore collected a database of foot and hand traits of users opening a door to grant or deny access. Reference [9] develops a privacy-aware method for action recognition using recurrent neural networks; the system accumulates reflections of light pulses emitted by a laser, using a single-pixel hybrid photodetector, which captures information about the shapes of objects and their distance to the capturing device.
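    To make the two-objective trade-off in [15] concrete, the sketch below (an illustration only, not the authors' ENORA/NSGA-II implementation; all names are hypothetical) keeps the rule-based models that are Pareto-optimal in accuracy versus rule count:

```python
# Illustrative sketch (not the ENORA/NSGA-II code from [15]): Pareto dominance
# over the two objectives -- maximize accuracy, minimize the number of rules.

from dataclasses import dataclass

@dataclass
class RuleModel:
    accuracy: float   # estimated, e.g., via 10-fold cross-validation
    n_rules: int      # interpretability proxy: fewer rules is better

def dominates(a: RuleModel, b: RuleModel) -> bool:
    """True if model `a` Pareto-dominates model `b`."""
    no_worse = a.accuracy >= b.accuracy and a.n_rules <= b.n_rules
    strictly_better = a.accuracy > b.accuracy or a.n_rules < b.n_rules
    return no_worse and strictly_better

def pareto_front(models: list[RuleModel]) -> list[RuleModel]:
    """Keep only the non-dominated trade-offs between the two objectives."""
    return [m for m in models
            if not any(dominates(o, m) for o in models if o is not m)]

candidates = [RuleModel(0.91, 12), RuleModel(0.89, 5), RuleModel(0.85, 9)]
print(pareto_front(candidates))  # RuleModel(0.85, 9) is dominated and dropped
```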

    Privacy-Constrained Biometric System for Non-cooperative Users

    With the consolidation of the new data protection regulation paradigm for each individual within the European Union (EU), major biometric technologies are now confronted with many concerns related to user privacy in biometric deployments. When an individual's biometrics are disclosed, sensitive personal data, such as financial or health information, are at high risk of being misused or compromised. This issue is escalated considerably in scenarios with non-cooperative users, such as elderly people residing in care homes, who may be unable to interact conveniently and securely with the biometric system. The primary goal of this study is to design a novel database to investigate the problem of automatic people recognition under privacy constraints. To this end, the collected dataset contains the subjects' hand and foot traits and excludes face biometrics in order to protect their privacy. We carried out extensive simulations using different baseline methods, including deep learning. Simulation results show that, with the spatial features extracted from the subject sequence in individual hand or foot videos, state-of-the-art deep models provide promising recognition performance.
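    As a rough illustration of the kind of baseline the abstract describes, the hedged sketch below (assuming PyTorch/torchvision; not the study's actual models or data pipeline) averages per-frame CNN features into one descriptor per hand or foot video and identifies subjects by cosine similarity:

```python
# Minimal sketch, not the paper's pipeline: frame-level spatial features from
# a pretrained CNN, averaged into a single descriptor per video.

import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the 512-d penultimate features
backbone.eval()

@torch.no_grad()
def video_descriptor(frames: torch.Tensor) -> torch.Tensor:
    """frames: (T, 3, 224, 224) preprocessed clip -> (512,) unit descriptor."""
    feats = backbone(frames)              # (T, 512) per-frame spatial features
    return F.normalize(feats.mean(dim=0), dim=0)

def identify(probe: torch.Tensor, gallery: dict[str, torch.Tensor]) -> str:
    """Return the enrolled subject whose descriptor is closest (cosine)."""
    return max(gallery, key=lambda sid: torch.dot(probe, gallery[sid]).item())
```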

    Cycle-consistent generative adversarial neural networks based low quality fingerprint enhancement

    Distortions such as dryness, wetness, blurriness, physical damage and the presence of dots in fingerprints are a detriment to good analysis. Even though fingerprint image enhancement is possible through physical measures, such as removing excess grease from the finger or recapturing the fingerprint after some time, these measures are usually not user-friendly and are time-consuming. In some cases, enhancement may not be possible at all if the cause of the distortion is permanent. In this paper, we propose unpaired image-to-image translation using cycle-consistent adversarial networks to translate images from the distorted domain to the undistorted domain, namely dry to not-dry, wet to not-wet, dotted to not-dotted, damaged to not-damaged and blurred to not-blurred. We use a database of low-quality fingerprint images containing 11,541 samples with dryness, wetness, blurriness, damage and dotted distortions. The database was prepared from real data collected at VISA application centres and was provided for this research by GEYCE Biometrics. For the evaluation of the proposed enhancement technique, we use a VGG16-based convolutional neural network to assess the percentage of enhanced fingerprint images that are correctly labelled as undistorted. The proposed quality enhancement technique achieves its largest improvement on wet fingerprints, of which 94% are detected as undistorted after enhancement. © 2020, Springer Science+Business Media, LLC, part of Springer Nature.
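    The cycle-consistency idea that makes unpaired training possible can be summarised in a few lines. Below is a minimal sketch of the standard CycleGAN-style cycle loss (the generator names G and F and the weight lam are generic conventions, not taken from the paper):

```python
# Hedged sketch of the cycle-consistency term in CycleGAN-style unpaired
# translation: G maps distorted -> clean fingerprints, F maps clean -> distorted.

import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G: nn.Module, F: nn.Module,
                           distorted: torch.Tensor, clean: torch.Tensor,
                           lam: float = 10.0) -> torch.Tensor:
    """Translating to the other domain and back should recover the input;
    this reconstruction penalty is what replaces paired supervision."""
    forward_cycle = l1(F(G(distorted)), distorted)  # distorted -> clean -> distorted
    backward_cycle = l1(G(F(clean)), clean)         # clean -> distorted -> clean
    return lam * (forward_cycle + backward_cycle)
```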

    Organ Segmentation in Poultry Viscera Using RGB-D

    We present a pattern recognition framework for semantic segmentation of visual structures, that is, multi-class labelling at pixel level, and apply it to the task of segmenting organs in the eviscerated viscera from slaughtered poultry in RGB-D images. This is a step towards replacing the current strenuous manual inspection at poultry processing plants. Features are extracted from feature maps such as activation maps from a convolutional neural network (CNN). A random forest classifier assigns class probabilities, which are further refined by utilizing context in a conditional random field. The presented method is compatible with both 2D and 3D features, which allows us to explore the value of adding 3D and CNN-derived features. The dataset consists of 604 RGB-D images showing 151 unique sets of eviscerated viscera from four different perspectives. A mean Jaccard index of 78.11% is achieved across the four classes of organs by using features derived from 2D, 3D and a CNN, compared to 74.28% using only basic 2D image features.
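    For reference, the mean Jaccard index reported above is the per-class intersection over union averaged across the organ classes; a minimal sketch (assuming NumPy and integer label maps, with illustrative class ids) might look like this:

```python
# Sketch of the reported metric: per-class Jaccard index (intersection over
# union), averaged over the classes present in either label map.

import numpy as np

def mean_jaccard(pred: np.ndarray, truth: np.ndarray, n_classes: int) -> float:
    """pred/truth: integer label maps of equal shape."""
    scores = []
    for c in range(n_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        if union:  # skip classes absent from both maps
            scores.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(scores))
```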

    PROJECT GROUP: CVG10-1025

    This project explores the possibility of creating a non-obtrusive and intuitive gesture interface for wearable computers. A computer vision solution based on the CONDENSATION algorithm is proposed, and the underlying tracking algorithms are designed and implemented. A new measurement model and contour likelihood function, based on both the shape and color of the hand, is presented. The hand tracking system is tested on scenes with varying lighting, motion blur, cluttered backgrounds and other skin-colored objects. Even under these difficult conditions, the performance is concluded to be good enough for wearable interface purposes.
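    For context, CONDENSATION is a particle filter: it maintains a weighted set of state hypotheses that are resampled, propagated through a motion model and re-weighted by an observation likelihood. A bare-bones sketch of one iteration follows (the likelihood argument is a placeholder standing in for the report's shape-and-color contour model, and the random-walk motion model is an assumption):

```python
# Bare-bones sketch of one CONDENSATION (particle filter) iteration.

import numpy as np

rng = np.random.default_rng(0)

def condensation_step(particles, weights, likelihood, motion_noise=2.0):
    """particles: (N, D) hand-state hypotheses; returns the updated set."""
    n = len(particles)
    # 1. Resample in proportion to the previous weights (factored sampling).
    idx = rng.choice(n, size=n, p=weights)
    # 2. Predict: propagate through a simple random-walk motion model
    #    (placeholder for the report's actual dynamics).
    particles = particles[idx] + rng.normal(0.0, motion_noise, particles.shape)
    # 3. Measure: re-weight each hypothesis by the observation likelihood,
    #    e.g. a shape-and-color contour score in the hand tracking case.
    weights = np.array([likelihood(p) for p in particles])
    return particles, weights / weights.sum()
```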

    Extremal Regions Detection Guided by Maxima of Gradient Magnitude

